Classification Rule for Small Samples: A Bootstrap Approach

Authors

  • M. M. Rahman
  • M. M. Hossain
  • A. K. Majumder
Abstract

#1 Dept. of Statistics, Islamic University, Kushtia, Bangladesh, Phone no. +8801718698811
#2 Dept. of Statistics, Islamic University, Kushtia, Bangladesh, Phone no. +8801716657066
#3 Dept. of Statistics, Jahangirnagar University, Savar, Dhaka, Bangladesh, Phone no. +8801711145041

ABSTRACT In recent years, classification has become one of the most popular computer-implemented data mining techniques. In this paper we address the issue of classification errors with small samples and propose a new bootstrap-based approach for quantifying the level of classification error. We investigate the performance of standard classification techniques and observe that bootstrap-based classification techniques reduce classification errors significantly compared with the usual techniques applied to small samples. This paper therefore proposes applying classification techniques under a bootstrap approach when classifying objects from small samples.
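The abstract does not spell out the proposed procedure, so the following is only a minimal sketch of one natural way to combine bootstrap resampling with a classification rule on a small sample: refit the rule on B bootstrap resamples, classify by majority vote, and quantify classification error from the observations left out of each resample (out-of-bag). The nearest-centroid base rule, the value of B, and the simulated data are illustrative assumptions, not the authors' choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y):
    # One mean vector per class; an illustrative stand-in for the (unspecified) base rule.
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    classes, centroids = model
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def bootstrap_classify(X, y, B=200):
    """Refit the base rule on B bootstrap resamples and keep an out-of-bag (OOB)
    record, one plausible way to quantify classification error without a
    separate test set when n is small."""
    n = len(y)
    models, oob_votes = [], [[] for _ in range(n)]
    for _ in range(B):
        idx = rng.integers(0, n, size=n)            # bootstrap resample of size n
        model = nearest_centroid_fit(X[idx], y[idx])
        models.append(model)
        left_out = np.setdiff1d(np.arange(n), idx)  # observations not drawn
        preds = nearest_centroid_predict(model, X[left_out])
        for i, p in zip(left_out, preds):
            oob_votes[i].append(p)
    oob_error = np.mean([np.mean(np.array(v) != y[i])
                         for i, v in enumerate(oob_votes) if v])
    return models, float(oob_error)

def predict_majority(models, X):
    # Aggregate the B bootstrap classifiers by majority vote.
    all_preds = np.array([nearest_centroid_predict(m, X) for m in models])
    return np.array([np.bincount(col).argmax() for col in all_preds.T])

# Tiny simulated two-class sample (n = 20) standing in for a real small sample.
X = np.vstack([rng.normal(0.0, 1.0, (10, 2)), rng.normal(1.5, 1.0, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
models, oob_error = bootstrap_classify(X, y)
print("out-of-bag error estimate:", round(oob_error, 3))
print("majority-vote predictions:", predict_majority(models, X))
```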

Similar resources

ipred : Improved Predictors

In classification problems, there have been several attempts to create rules that assign future observations to certain classes. Common methods include, for example, linear discriminant analysis and classification trees. Recent developments have led to a substantial reduction of misclassification error in many applications. Bootstrap aggregation (“bagging”, Breiman, 1996a) combines classifiers trained on bootstr...
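ipred itself is an R package, so the snippet below is not its interface; it is only a minimal Python sketch of the bagging idea summarized above, with scikit-learn's DecisionTreeClassifier assumed as the base learner and B and the random seed chosen arbitrarily.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # assumed stand-in base learner

def bagging_fit(X, y, B=50, seed=0):
    """Bagging: fit one classification tree per bootstrap resample."""
    rng = np.random.default_rng(seed)
    n = len(y)
    trees = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of the training set
        trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))
    return trees

def bagging_predict(trees, X):
    """Combine the B trees by majority vote."""
    votes = np.array([t.predict(X) for t in trees]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```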

Estimating misclassification error with small samples via bootstrap cross-validation

MOTIVATION Estimation of misclassification error has received increasing attention in clinical diagnosis and bioinformatics studies, especially in small sample studies with microarray data. Current error estimation methods are not satisfactory because they either have large variability (such as leave-one-out cross-validation) or large bias (such as resubstitution and leave-one-out bootstrap). W...
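The exact bootstrap cross-validation procedure of the cited paper is not reproduced here; the snippet below is only a hedged sketch of the general idea it names: run cross-validation inside each of B bootstrap resamples and average the resulting error rates. The leave-one-out folds, the LDA base classifier, and the value of B are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bootstrap_cv_error(X, y, B=50, seed=0):
    """Bootstrap cross-validation sketch: leave-one-out CV inside each of
    B bootstrap resamples, averaged over the resamples."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)               # bootstrap resample
        Xb, yb = X[idx], y[idx]
        vals, counts = np.unique(yb, return_counts=True)
        if len(vals) < 2 or counts.min() < 2:          # skip degenerate resamples
            continue
        wrong = 0
        for i in range(n):                             # leave-one-out within the resample
            mask = np.arange(n) != i
            clf = LinearDiscriminantAnalysis().fit(Xb[mask], yb[mask])
            wrong += int(clf.predict(Xb[i:i + 1])[0] != yb[i])
        errors.append(wrong / n)
    return float(np.mean(errors))
```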

Superior feature-set ranking for small samples using bolstered error estimation

MOTIVATION Ranking feature sets is a key issue for classification, for instance, phenotype classification based on gene expression. Since ranking is often based on error estimation, and error estimators suffer to differing degrees from imprecision in small-sample settings, it is important to choose a computationally feasible error estimator that yields good feature-set ranking. RESULTS This pap...
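Bolstered error estimation is only named above, so the sketch below shows the general mechanism by Monte Carlo: spread a Gaussian kernel around each training point and average the classifier's error over the perturbed copies. The fixed bandwidth sigma, the spherical kernel, and the Monte Carlo sample size are assumptions; the published method chooses the bolstering kernels from the data.

```python
import numpy as np

def bolstered_resubstitution(predict, X, y, sigma=0.5, n_mc=200, seed=0):
    """Monte Carlo sketch of bolstered resubstitution: perturb each training
    point with spherical Gaussian noise (assumed fixed bandwidth sigma) and
    average the classifier's error over the perturbed copies."""
    rng = np.random.default_rng(seed)
    point_errors = []
    for xi, yi in zip(X, y):
        perturbed = xi + rng.normal(0.0, sigma, size=(n_mc, X.shape[1]))
        point_errors.append(np.mean(predict(perturbed) != yi))
    return float(np.mean(point_errors))

# Usage with any fitted classifier exposing predict(X), e.g.:
#   clf = SomeClassifier().fit(X, y)          # hypothetical fitted classifier
#   err = bolstered_resubstitution(clf.predict, X, y)
```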

A comparison of bootstrap methods and an adjusted bootstrap approach for estimating prediction error in microarray classification Short title: Bootstrap Prediction Error Estimation

SUMMARY This paper first provides a critical review of some existing methods for estimating prediction error in classifying microarray data, where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We intr...
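The adjusted bootstrap estimator that the paper introduces is not reproduced here; the snippet is only a sketch of the classical .632 bootstrap that such comparisons typically start from, combining the resubstitution error with the leave-one-out bootstrap error. The LDA base classifier and B are assumptions, and a genuine microarray setting with far more genes than specimens would call for a different or regularized classifier.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dot632_bootstrap_error(X, y, B=100, seed=0):
    """Sketch of the classical .632 bootstrap estimator:
       err_.632 = 0.368 * resubstitution error + 0.632 * leave-one-out bootstrap error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    clf = LinearDiscriminantAnalysis().fit(X, y)
    err_resub = np.mean(clf.predict(X) != y)
    # Leave-one-out bootstrap: each point is judged only by resamples that exclude it.
    point_errors = [[] for _ in range(n)]
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        if len(np.unique(y[idx])) < 2:        # skip one-class resamples
            continue
        clf_b = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        for i in np.setdiff1d(np.arange(n), idx):
            point_errors[i].append(int(clf_b.predict(X[i:i + 1])[0] != y[i]))
    err_loo_boot = np.mean([np.mean(e) for e in point_errors if e])
    return 0.368 * err_resub + 0.632 * err_loo_boot
```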

On the Choice of m in the m Out of n Bootstrap and its Application to Confidence Bounds for Extreme Percentiles

The m out of n bootstrap (Bickel et al. [1997], Politis and Romano [1994]) is a modification of the ordinary bootstrap, which can rectify bootstrap failure when the bootstrap sample size is n. The modification is to take bootstrap samples of size m, where m → ∞ and m/n → 0. The choice of m is an important matter in general. In this paper we consider an adaptive rule proposed by Bickel, Götze and va...
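The adaptive rule for choosing m considered in the paper is not implemented below; the snippet only sketches the m-out-of-n resampling mechanics for an extreme percentile, with the fixed choice m = ceil(n^(2/3)) and a crude percentile-method bound standing in as assumptions.

```python
import numpy as np

def m_out_of_n_quantile_bound(x, q=0.95, alpha=0.05, B=2000, seed=0):
    """m-out-of-n bootstrap sketch for an extreme percentile: resample size m
    grows with n while m/n -> 0. The fixed m = ceil(n ** (2/3)) is an assumed
    choice; the paper's adaptive rule for m is not implemented here."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(np.ceil(n ** (2 / 3)))                 # m -> infinity, m / n -> 0
    stats = np.empty(B)
    for b in range(B):
        sub = rng.choice(x, size=m, replace=True)  # bootstrap sample of size m < n
        stats[b] = np.quantile(sub, q)             # extreme percentile in the resample
    # Crude percentile-method lower confidence bound from the m-out-of-n distribution.
    return float(np.quantile(stats, alpha))
```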

Journal title:

Volume   Issue

Pages  -

Publication date: 2013